Spatial and temporal factors during processing of audiovisual speech: a PET study.

Authors

  • E Macaluso
  • N George
  • R Dolan
  • C Spence
  • J Driver
Abstract

Speech perception can draw not only on auditory signals but also on visual information from seeing the speaker's mouth. The relative timing and relative location of auditory and visual inputs are both known to influence crossmodal integration psychologically, but previous imaging studies of audiovisual speech focused primarily on temporal aspects. Here we used Positron Emission Tomography (PET) during audiovisual speech processing to study how temporal and spatial factors might jointly affect brain activations. In agreement with previous work, synchronous versus asynchronous audiovisual speech yielded increased activity in multisensory association areas (e.g., superior temporal sulcus [STS]), as well as in some unimodal visual areas. Our orthogonal manipulation of relative stimulus position (auditory and visual stimuli presented at the same location vs. opposite sides) and stimulus synchrony showed that (i) ventral occipital areas and superior temporal sulcus were unaffected by relative location; (ii) lateral and dorsal occipital areas were selectively activated for synchronous bimodal stimulation at the same external location; (iii) right inferior parietal lobule was activated for synchronous auditory and visual stimuli at different locations, that is, in the condition classically associated with the 'ventriloquism effect' (shift of perceived auditory position toward the visual location). Thus, different brain regions are involved in different aspects of audiovisual integration. While ventral areas appear more affected by audiovisual synchrony (which can influence speech identification), more dorsal areas appear to be associated with spatial multisensory interactions.


Similar articles

The Role of Speech Production System in Audiovisual Speech Perception

Seeing the articulatory gestures of the speaker significantly enhances speech perception. Findings from recent neuroimaging studies suggest that activation of the speech motor system during lipreading enhances speech perception by tuning, in a top-down fashion, speech-sound processing in the superior aspects of the posterior temporal lobe. Anatomically, the superior-posterior temporal lobe areas...


Audiovisual speech integration: modulatory factors and the link to sound symbolism

In this talk, I will review some of the latest findings from the burgeoning literature on the audiovisual integration of speech stimuli. I will focus on those factors that have been demonstrated to influence this form of multisensory integration (such as temporal coincidence, speaker/gender matching, and attention; Vatakis & Spence, 2007, 2010). I will also look at a few of the oth...


Comprehension of audiovisual speech: Data-based sorting of independent components of fMRI activity

To find out whether decreased intelligibility of natural audiovisual speech would enhance the involvement of the sensorimotor speech comprehension network, we had ten adults listen to and view a video in which the intelligibility of continuous audiovisual speech was varied by altering the loudness between well audible, just audible (–18 dB weaker than loud), and mute. Occasionally the voice a...


Physical and perceptual factors shape the neural mechanisms that integrate audiovisual signals in speech comprehension.

Face-to-face communication challenges the human brain to integrate information from the auditory and visual senses with linguistic representations. Yet the roles of bottom-up physical input (spectrotemporal structure) and top-down linguistic constraints in shaping the neural mechanisms specialized for integrating audiovisual speech signals are currently unknown. Participants were presented with spee...


Auditory, Visual and Audiovisual Speech Processing Streams in Superior Temporal Sulcus

The human superior temporal sulcus (STS) is responsive to visual and auditory information, including sounds and facial cues during speech recognition. We investigated the functional organization of STS with respect to modality-specific and multimodal speech representations. Twenty younger adult participants were instructed to perform an oddball detection task and were presented with auditory, v...



Journal:
  • NeuroImage

Volume 21, Issue 2

Pages: -

Published: 2004